annual conference
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- (7 more...)
LLM Flow Processes for Text-Conditioned Regression
Meta-learning methods for regression, such as Neural (Diffusion) Processes, achieve impressive results, but it can be difficult to incorporate expert prior knowledge or information contained in metadata into these models. Large Language Models (LLMs) are trained on vast corpora that include varied real-world regression datasets alongside their descriptions and metadata, leading to impressive performance on a range of downstream tasks. Recent work has extended LLMs to regression tasks, where they can leverage such prior knowledge and metadata and achieve surprisingly good performance, though they still rarely match dedicated meta-learning methods. Here we introduce a general method for sampling from a product of experts combining a diffusion or flow-matching model with an 'expert' defined by a binned probability density; we apply it to combine neural diffusion processes with LLM token probabilities for regression (which can incorporate textual knowledge), exceeding the empirical performance of either component alone.
- Europe > Austria > Vienna (0.14)
- Africa > Rwanda > Kigali > Kigali (0.05)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.05)
- (10 more...)
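To make the product-of-experts sampling described in the abstract above concrete, here is a minimal, hypothetical sketch for a scalar target: a binned expert density (e.g. derived from LLM token probabilities over value bins) is turned into a smoothed guidance gradient and added to the score of a pretrained model inside a plain annealed-Langevin loop. The names (`score_model`, `binned_guidance_grad`) and the sampler itself are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def binned_guidance_grad(bin_edges, bin_probs, eps=1e-12, h=1e-3):
    """Gradient of a smoothed log-density built from bin probabilities.

    The raw binned density is piecewise constant (zero gradient almost
    everywhere), so we linearly interpolate the log-density between bin
    centres and differentiate that interpolant numerically.
    """
    widths = np.diff(bin_edges)
    centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    log_dens = np.log(bin_probs / widths + eps)

    def grad(x):
        f = lambda z: np.interp(z, centres, log_dens)
        return (f(x + h) - f(x - h)) / (2.0 * h)

    return grad

def sample_product(score_model, grad_expert, guidance=1.0,
                   n_steps=500, sigma_max=3.0, seed=0):
    """Annealed Langevin sampling from (approximately) the product
    p_model(x) * p_expert(x)**guidance for a scalar target x.

    Illustrative only: the paper's actual sampler may differ."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_max)
    for t in np.linspace(1.0, 1e-3, n_steps):
        step = 1e-2 * t  # shrinking step size
        score = score_model(x, t) + guidance * grad_expert(x)
        x = x + step * score + np.sqrt(2.0 * step) * rng.normal()
    return x

# Toy usage: a standard-normal base "model" and an expert whose bins
# concentrate mass around x ~ 2; samples land between the two.
score_model = lambda x, t: -x                     # score of N(0, 1)
edges = np.linspace(-4.0, 4.0, 9)                 # 8 bins
probs = np.array([.01, .01, .02, .06, .10, .20, .40, .20])
probs = probs / probs.sum()
print(sample_product(score_model, binned_guidance_grad(edges, probs)))
```

Interpolating the log-density between bin centres is one simple way to obtain a usable guidance gradient, since the raw piecewise-constant density is flat inside each bin; other smoothing choices would work equally well in this sketch.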
Towards Higher Ranks via Adversarial Weight Pruning
Convolutional Neural Networks (CNNs) are hard to deploy on edge devices due to their high computation and storage costs. As a common practice for model compression, network pruning falls into two major categories, unstructured and structured, of which unstructured pruning consistently performs better. However, unstructured pruning exhibits a structured pattern at high pruning rates, which limits its performance. To this end, we propose a Rank-based PruninG (RPG) method that maintains the ranks of sparse weights in an adversarial manner.
- Europe > Italy (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > United States > Nevada (0.04)
- (14 more...)
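As a rough illustration of the adversarial rank objective sketched in this abstract, the snippet below pits a truncated-SVD adversary (the best rank-r approximation of the pruned weight, optimal by Eckart-Young) against a loss term that pushes the sparse weight away from that approximation, so the weight keeps high rank under an unstructured magnitude mask. This is a hedged reconstruction from the abstract alone; the function names and the exact objective are assumptions, not the authors' code.

```python
import torch

def magnitude_mask(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Unstructured magnitude mask: zero out the smallest |w| entries."""
    k = max(1, int(w.numel() * sparsity))
    thresh = w.abs().flatten().kthvalue(k).values
    return (w.abs() > thresh).float()

def rank_adversary_loss(w: torch.Tensor, mask: torch.Tensor, r: int) -> torch.Tensor:
    """Adversarial rank proxy (illustrative, not the authors' code).

    The adversary plays the best rank-r approximation of the pruned weight
    (truncated SVD, optimal by Eckart-Young); the pruning objective then
    pushes the sparse weight AWAY from that approximation, encouraging it
    to keep high rank even at extreme sparsity.
    """
    w_sparse = w * mask
    U, S, Vh = torch.linalg.svd(w_sparse, full_matrices=False)
    w_lowrank = (U[:, :r] * S[:r]) @ Vh[:r]  # adversary's best move
    # negative relative residual: minimising this maximises the distance
    return -torch.norm(w_sparse - w_lowrank) / (torch.norm(w_sparse) + 1e-12)

# Toy usage: combine with any task loss (a dummy L2 term here).
w = torch.randn(64, 64, requires_grad=True)
mask = magnitude_mask(w.detach(), sparsity=0.9)
loss = (w * mask).pow(2).mean() + 0.1 * rank_adversary_loss(w, mask, r=8)
loss.backward()
```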